Meta-trained agents implement Bayes-optimal agents

Neural Information Processing Systems

Memory-based meta-learning is a powerful technique to build agents that adapt fast to any task within a target distribution. A previous theoretical study has argued that this remarkable performance is because the meta-training protocol incentivises agents to behave Bayes-optimally. We empirically investigate this claim on a number of prediction and bandit tasks. Inspired by ideas from theoretical computer science, we show that meta-learned and Bayes-optimal agents not only behave alike, but even share a similar computational structure, in the sense that one agent system can approximately simulate the other. Furthermore, we show that Bayes-optimal agents are fixed points of the meta-learning dynamics. Our results suggest that memory-based meta-learning is a general technique for numerically approximating Bayes-optimal agents, even for task distributions for which no tractable model is currently known.
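To make the claim concrete on the simplest prediction task the abstract alludes to, here is a minimal sketch (not from the paper; task setup and function names are illustrative). For Bernoulli sequence prediction with task parameter θ drawn uniformly, the predictor that minimizes the meta-training log loss averaged over tasks is exactly the Bayes posterior predictive under a Beta(1,1) prior (Laplace's rule), which is what a meta-trained agent is incentivised to converge to. The Monte Carlo comparison below checks that this Bayes-optimal predictor achieves lower average log loss than a non-adaptive baseline:

```python
import math
import random

def bayes_predict(heads, tails):
    # Posterior predictive under a uniform Beta(1,1) prior: Laplace's rule.
    return (heads + 1) / (heads + tails + 2)

def meta_log_loss(n_tasks=2000, seq_len=20, seed=0):
    """Average per-step log loss over a distribution of Bernoulli tasks,
    for the Bayes-optimal predictor vs. a constant p=0.5 baseline."""
    rng = random.Random(seed)
    bayes_loss = const_loss = 0.0
    n_preds = 0
    for _ in range(n_tasks):
        theta = rng.random()          # task parameter sampled from the prior
        heads = tails = 0
        for _ in range(seq_len):
            x = 1 if rng.random() < theta else 0
            p = bayes_predict(heads, tails)
            bayes_loss -= math.log(p if x else 1 - p)
            const_loss -= math.log(0.5)  # memoryless, non-adaptive baseline
            heads += x
            tails += 1 - x
            n_preds += 1
    return bayes_loss / n_preds, const_loss / n_preds
```

Any agent with memory whose meta-training loss is driven below the baseline must be approximating this posterior predictive, which is the sense in which the meta-training protocol incentivises Bayes-optimal behaviour.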

  bayes-optimal agent, meta-trained agent implement bayes-optimal agent, name change, (2 more...)

Review for NeurIPS paper: Meta-trained agents implement Bayes-optimal agents

Neural Information Processing Systems

Weaknesses: Perhaps the rhetoric of the paper is a little overheated, with lots of ITALICS and claims of novelty and significance that exceed the actual findings (see below). The basic ideas do not seem all that "striking". Yes, of course if we have a parameterized policy family that includes the optimal policy by design, and we train it with feedback such that the optimal policy is the one that maximizes the feedback, then it works. Note that the "target distribution" can be thought of as an initial stochastic step in a single (PO)MDP that samples the problem parameters, so the process is learning a policy for that POMDP. Where [10] (for which the authors are of course not necessarily responsible) says "Essentially, memory-based meta-learning translates the hard problem of probabilistic sequential inference into a regression problem," this is exactly what Monte Carlo RL does.
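The reviewer's reduction can be sketched directly (this is an illustrative construction, not code from the paper or the review): fold the task distribution into a single POMDP whose first stochastic step samples the hidden task parameters, then evaluate memory-based policies on that POMDP by plain Monte Carlo, exactly as Monte Carlo RL would. The policy names and horizon below are assumptions for the example:

```python
import random

def meta_bandit_episode(policy, horizon=50, rng=None):
    # The "target distribution" as the POMDP's initial stochastic step:
    # hidden task parameters are sampled once, then fixed for the episode.
    thetas = [rng.random(), rng.random()]  # two-armed Bernoulli bandit
    history = []                           # the agent's memory (observations)
    total = 0
    for _ in range(horizon):
        arm = policy(history, rng)
        reward = 1 if rng.random() < thetas[arm] else 0
        history.append((arm, reward))
        total += reward
    return total

def random_policy(history, rng):
    return rng.randrange(2)

def greedy_memory_policy(history, rng, explore=10):
    # Pull each arm alternately a few times, then commit to the empirical best.
    if len(history) < explore:
        return len(history) % 2
    means = []
    for a in (0, 1):
        rewards = [r for arm, r in history if arm == a]
        means.append(sum(rewards) / len(rewards))
    return 0 if means[0] >= means[1] else 1

def mc_value(policy, n_episodes=2000, seed=0):
    # Monte Carlo policy evaluation on the single folded-in POMDP.
    rng = random.Random(seed)
    return sum(meta_bandit_episode(policy, rng=rng)
               for _ in range(n_episodes)) / n_episodes
```

Because the task parameters are just unobserved state, a policy that conditions on its history (memory) outperforms a memoryless one, and optimizing returns over this POMDP is the same regression-style objective that the meta-training protocol poses.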

